
    The iWildCam 2019 Challenge Dataset

    Camera Traps (or Wild Cams) enable the automatic collection of large quantities of image data. Biologists all over the world use camera traps to monitor biodiversity and population density of animal species. The computer vision community has been making strides towards automating species classification in camera trap images, but as we try to expand the scope of these models from the specific regions where we have collected training data to different areas, we are faced with an interesting problem: how do you classify a species in a new region that you may not have seen in previous training data? To tackle this problem, we have prepared a dataset and challenge where the training data and test data come from different regions, namely the American Southwest and the American Northwest. We use the Caltech Camera Traps dataset, collected from the American Southwest, as training data. We add a new dataset from the American Northwest, curated from data provided by the Idaho Department of Fish and Game (IDFG), as our test dataset. The test data has some class overlap with the training data (some species are found in both datasets), but there are species seen during training that are not seen at test time, and vice versa. To help fill the gaps in the training species, we allow competitors to utilize transfer learning from two alternate domains: human-curated images from iNaturalist and synthetic images from Microsoft's TrapCam-AirSim simulation environment.
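
    As a concrete illustration of the protocol this abstract describes (train on one region, evaluate on another, with transfer learning to fill species gaps), the minimal sketch below fine-tunes a pretrained classifier on one region's images and measures accuracy on the held-out region. The folder names, the per-species sub-directory layout, and the use of ImageNet weights as a stand-in for the iNaturalist/TrapCam-AirSim sources are all assumptions for illustration, not the official iWildCam data format.

```python
# A minimal sketch of the cross-region setup, assuming hypothetical folders
# "southwest_train/" and "northwest_test/" with one sub-directory per species.
# ImageNet weights stand in for the iNaturalist / TrapCam-AirSim transfer sources.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

train_ds = datasets.ImageFolder("southwest_train", transform=tfm)
# For simplicity this assumes the test folders reuse the training species names;
# the real challenge also contains species that appear in only one region.
test_ds = datasets.ImageFolder("northwest_test", transform=tfm)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True, num_workers=4)
test_dl = DataLoader(test_ds, batch_size=32)

# Transfer learning: keep the pretrained backbone, replace the classification head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):  # short fine-tuning run on the training region
    model.train()
    for x, y in train_dl:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# Accuracy on the held-out region measures cross-region generalization.
model.eval()
correct = total = 0
with torch.no_grad():
    for x, y in test_dl:
        pred = model(x.to(device)).argmax(dim=1).cpu()
        correct += (pred == y).sum().item()
        total += y.numel()
print(f"accuracy on unseen region: {correct / total:.3f}")
```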

    The iWildCam 2018 Challenge Dataset

    Camera traps are a valuable tool for studying biodiversity, but research using this data is limited by the speed of human annotation. With the vast amounts of data now available, it is imperative that we develop automatic solutions for annotating camera trap data in order to allow this research to scale. A promising approach is based on deep networks trained on human-annotated images. We provide a challenge dataset to explore whether such solutions generalize to novel locations, since systems that can be trained once and then deployed to operate automatically in new locations would be most useful.
    Comment: Challenge hosted at the fifth Fine-Grained Visual Categorization Workshop (FGVC5) at CVPR 2018.
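
    The key evaluation idea here, generalization to camera locations never seen during training, comes down to splitting by location rather than by image. The sketch below shows one way to build such a location-disjoint split; the annotation file name and the assumption that each image record carries a "location" field are illustrative, not a documented iWildCam API.

```python
# A sketch of a location-disjoint split: hold out entire camera locations so
# evaluation measures generalization to places never seen in training.
# The file name and per-image "location" field are assumptions for illustration.
import json
import random
from collections import defaultdict

with open("iwildcam_annotations.json") as f:  # hypothetical path
    data = json.load(f)

# Group image ids by the camera location that captured them.
by_location = defaultdict(list)
for img in data["images"]:
    by_location[img["location"]].append(img["id"])

# Hold out ~20% of *locations* (not images) for evaluation.
locations = sorted(by_location)
random.Random(0).shuffle(locations)
n_held_out = max(1, len(locations) // 5)
test_locations = set(locations[:n_held_out])

train_ids, test_ids = [], []
for loc, ids in by_location.items():
    (test_ids if loc in test_locations else train_ids).extend(ids)

print(f"{len(train_ids)} training images from {len(locations) - n_held_out} locations")
print(f"{len(test_ids)} test images from {n_held_out} unseen locations")
```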

    Natural Variation in the Oxytocin Receptor Gene and Rearing Interact to Influence Reproductive and Nonreproductive Social Behavior and Receptor Binding

    Individual variation in social behavior offers an opportunity to explore gene-by-environment interactions that could contribute to adaptive or atypical behavioral profiles (e.g., autism spectrum disorders). Outbred, socially monogamous prairie voles provide an excellent model to experimentally explore how natural variations in rearing and genetic diversity interact to shape reproductive and nonreproductive social behavior. In this study, we manipulated rearing (biparental versus dam-only), genotyped the intronic NT213739 single nucleotide polymorphism (SNP) of the oxytocin receptor gene (Oxtr), and then assessed how each factor and their interaction related to reciprocal interactions and partner preference in male and female adult prairie voles. We found that C/T subjects reared biparentally formed more robust partner preferences than T/T subjects. In general, dam-only reared animals huddled less with a conspecific in reproductive and nonreproductive contexts, but the effect of rearing was more pronounced in T/T animals. In line with previous literature, C/T animals exhibited higher densities of oxytocin receptor (OXTR) in the striatum (caudoputamen, nucleus accumbens) compared to T/T subjects. There was also a gene-by-rearing interaction in the striatum and insula of females: in the insula, T/T females expressed varying OXTR densities depending on rearing. Overall, this study demonstrates that significant differences in adult reproductive and nonreproductive social behavior and OXTR density can arise due to natural differences in Oxtr, experimental manipulations of rearing, and their interaction.
